-
This paper examines procedural and conditional metacognitive knowledge and student motivation across two ITSs (logic and probability). Students were categorized by metacognitive knowledge and motivation level. Interventions (nudges and worked examples) supported the backward-chaining strategy. The results led to an MMI framework combining metacognitive instruction, motivation, and prompting to support effective knowledge transfer.
-
Evaluates Deep Knowledge Tracing (DKT) models' ability to track individual knowledge components (KCs) in programming tasks. Proposes two enhancements, an explicit KC layer and code-derived features, and shows that the KC layer yields modest improvements in KC-level interpretability, especially when tracking incorrect submissions.
-
Proposes a deep reinforcement learning (DRL)-based pedagogical policy that decides when to present or skip training problems in a logic tutor. Four conditions are compared: control, adaptive DRL, random skipping, and DRL with worked-example choice. The DRL policy reduces training time while maintaining posttest performance.
-
A key challenge in e-learning environments like Intelligent Tutoring Systems (ITSs) is to induce effective pedagogical policies efficiently. While Deep Reinforcement Learning (DRL) often suffers from sample inefficiency and the difficulty of reward function design, Apprenticeship Learning (AL) algorithms can overcome both. However, most AL algorithms cannot handle heterogeneity, as they assume all demonstrations are generated by a homogeneous policy driven by a single reward function. Moreover, the few AL algorithms that do consider heterogeneity often cannot generalize to large continuous state spaces and work only with discrete states. In this paper, we propose expectation-maximization (EM)-EDM, a general AL framework that induces effective pedagogical policies from given optimal or near-optimal demonstrations, which are assumed to be driven by heterogeneous reward functions. We compare the effectiveness of the policies induced by EM-EDM against four AL-based baselines and two policies induced by DRL on two different but related pedagogical action-prediction tasks. Our results show that, for both tasks, EM-EDM outperforms the four AL baselines and the two DRL baselines across all performance metrics. This suggests that EM-EDM can effectively model complex student pedagogical decision-making processes: it manages a large, continuous state space and adapts to diverse, heterogeneous reward functions with very few demonstrations.
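The core EM idea above, clustering demonstrations that were driven by different reward functions and fitting one policy per cluster, can be sketched on a toy present/skip task. Everything here is illustrative (the data, the two-cluster Bernoulli model, and all names are assumptions, not the paper's actual EM-EDM implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy demonstrations: each row is one demonstrator's binary action
# sequence (1 = skip the next problem, 0 = present it). Two hidden
# "reward styles": skip-prone and present-prone demonstrators.
demos = np.vstack([
    rng.binomial(1, 0.8, size=(10, 20)),
    rng.binomial(1, 0.2, size=(10, 20)),
])
n_demos, n_steps = demos.shape

K = 2                                # assumed number of reward functions
theta = np.array([0.6, 0.4])         # per-cluster skip probability (init)
pi = np.full(K, 1.0 / K)             # cluster mixing weights

skips = demos.sum(axis=1, keepdims=True)   # skips per demonstration

for _ in range(50):
    # E-step: posterior responsibility of each cluster for each demo
    log_lik = skips * np.log(theta) + (n_steps - skips) * np.log(1 - theta)
    log_post = np.log(pi) + log_lik
    log_post -= log_post.max(axis=1, keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: refit each cluster's policy parameter and mixing weight
    nk = resp.sum(axis=0)
    theta = (resp * skips).sum(axis=0) / (nk * n_steps)
    pi = nk / n_demos

print(np.round(np.sort(theta), 2))
```

With well-separated toy clusters, the recovered skip rates land near 0.2 and 0.8. The actual framework goes further: each cluster's policy is induced over a large continuous state space rather than a single Bernoulli parameter.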
-
Learning to derive subgoals reduces the gap between experts and students and prepares students for future problem solving. This paper explores a training strategy that uses backward worked examples (BWE) and backward problem solving (BPS) within an intelligent logic tutor to support backward strategy learning, with analysis of student experience, performance, and proof construction. Results show that students trained with both BWE and BPS outperform those receiving neither or only BWE, demonstrating more efficient subgoal derivation.